Journal of Cognitive Neuroscience
MIT Press
Preprints posted in the last 30 days, ranked by how well they match Journal of Cognitive Neuroscience's content profile, based on 119 papers previously published here. The average preprint has a 0.06% match score for this journal, so anything above that is already an above-average fit.
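The listing does not specify how the match score is computed, but one common approach to this kind of content profiling is a bag-of-words cosine similarity between a preprint's abstract and the pooled text of the journal's previously published papers. The sketch below is purely illustrative — the function name, the word-count representation, and the toy corpus are all assumptions, not the service's actual method.

```python
import math
from collections import Counter

def cosine_match(abstract: str, corpus_docs: list[str]) -> float:
    """Toy match score: cosine similarity between an abstract's word
    counts and the pooled word counts of a journal's past papers.
    (Hypothetical illustration; the real scoring method is unspecified.)"""
    a = Counter(abstract.lower().split())
    b = Counter(" ".join(corpus_docs).lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Two made-up "published papers" standing in for the journal's corpus.
corpus = ["working memory attention fmri", "eeg attention oscillations"]

score_related = cosine_match("attention and working memory measured with eeg", corpus)
score_unrelated = cosine_match("soil erosion in river deltas", corpus)
```

A topically related abstract scores higher than an unrelated one; a real system would add stopword removal and TF-IDF or embedding-based weighting, which is why the raw average score quoted above (0.06%) can be so small.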
Yang, J.; Carter, O.; Shivdasani, M. N.; Grayden, D. B.; Hester, R.; Barutchu, A.
Selective attention enables the prioritization of task-relevant information while managing distractors, and steady-state visual evoked potentials (SSVEPs) are widely used to track this process by tagging different visual objects at distinct flicker frequencies. However, whether the choice of tagging frequency itself influences other neural and cognitive measures remains unclear. Here, 27 participants performed detection and 1-back working memory tasks while a central target and peripheral distractors flickered at either 8.6 Hz or 12 Hz. The working memory task produced slower responses, more errors, and greater perceived difficulty than detection. Tagging frequency strongly shaped neural responses, with 8.6 Hz eliciting higher SSVEP signal-to-noise ratios than 12 Hz regardless of stimulus location. Nevertheless, stronger SSVEP responses for centrally attended stimuli were associated with fewer working memory errors and larger early visual ERP responses, while SSVEPs for attended and distractor stimuli were negatively correlated. In addition, the working memory task produced a larger P1-N1 peak-to-peak difference, and tagging frequency altered the timing and amplitude of early ERP effects. Together, these findings show that tagging frequency is not a neutral methodological parameter, but one that shapes both neural indices of attention and their relationship to cognitive performance.
Kalburge, I.; Dallstream, A.; Josic, K.; Kilpatrick, Z. P.; Ding, L.; Gold, J. I.
Decisions based on evidence accumulated over time require rules governing when to end the accumulation process and commit to a choice. These rules control inherent trade-offs between decision speed and accuracy, which must be carefully balanced to maximize quantities, such as reward rate, that depend on both. We previously showed that, to maximize reward rate, normative decision rules adapt to changing task conditions (Barendregt et al., 2022). Here we used a novel task to examine whether and how people use adaptive rules for individual decisions under a variety of conditions, including changes in decision outcomes across trials and changes in evidence quality both across and within trials. We found that the participants tended to use rules that adjusted, at least partially, to predictable changes in task conditions to improve reward rate, consistent with a rationally bounded implementation of normative principles. These findings help inform our understanding of the extent and limits of flexible decision formation in the brain.
Vivion, M.; Mathy, F.; Guida, A.; Mondot, L.; Ramanoel, S.
Spatialization in working memory refers to the spatial coding of non-spatial information along a mental horizontal line when encoding verbal material. This phenomenon is thought to support working memory by facilitating order encoding. Although it has been observed for both visually and auditorily presented stimuli, no direct comparison has yet examined whether these modalities rely on similar neural mechanisms. In this study, we investigated whether spatialization in visual and auditory modalities involves shared or distinct patterns of activity within the working-memory network. Forty-nine participants performed both a visual and an auditory working memory SPoARC task using the same verbal material, allowing us to study the cortical patterns associated with distinct serial positions at both encoding and recognition across sensory modalities. Whole-brain analyses revealed similar frontoparietal networks across conditions. In addition, a representational similarity analysis (RSA) was conducted to assess the similarity of neural patterns between early and late serial positions in a sequence and across sensory modalities. This multivoxel pattern analysis revealed modality-dependent patterns distinguishing early and late positions in the inferior frontal gyrus. Additional modality-specific effects were observed in the anterior intraparietal sulcus in the visual modality and in the posterior hippocampus in the auditory modality. Drawing on the framework proposed by Bottini & Doeller (2020), we propose that order decoding in the IPS might reflect a low-dimensional spatial coding of order (e.g., along a horizontal axis), whereas order decoding in the hippocampus might reflect higher-dimensional spatial representations or temporal representations.
Bair, M. B.; Long, N. M.
It is critical to identify which factors induce specific brain states as these large-scale patterns of coordinated neural activity drive downstream processing and behavior. The retrieval state, a brain state engaged when attempting to retrieve the past, is thought to specifically support episodic memory, remembering experiences within a spatiotemporal context, as opposed to semantic memory, remembering general knowledge. However, we hypothesize that the retrieval state reflects internal attention engaged to access stored episodic and semantic information. To test these alternatives, we recorded scalp electroencephalography while participants made episodic, semantic, or perceptual judgments, and applied an independently validated mnemonic state classifier to measure retrieval state engagement. We found that retrieval state engagement was greater for both episodic and semantic judgments compared to perceptual judgments. These findings suggest that the retrieval state reflects a domain-general internal attention process that supports not just episodic memory, but internally directed cognition.
Figarola, V.; Liang, W.; Luthra, S.; Parker, E.; Winn, M.; Brown, C.; Shinn-Cunningham, B. G.
Listeners face many challenges when trying to maintain attention to a target source in everyday settings; for instance, reverberation distorts acoustic cues and interruptions capture attention. However, little is known about how these challenges affect the ability to maintain selective attention. Here, we measured syllable recall accuracy and pupil dilation during a spatial selective attention task that was sometimes disrupted. Participants heard two competing, temporally interleaved syllable streams presented in pseudo-anechoic or reverberant environments. On randomly selected trials, a sudden interruption occurred mid-sequence. Compared to anechoic trials, reverberant performance was worse overall, and the interrupter disrupted performance. In uninterrupted trials, reverberation reduced peak pupil dilation both when it was consistent across all stimuli in a block and when it was randomized trial to trial, suggesting temporal smearing reduced clarity of the scene and the salience of events in the ongoing streams. Pupil dilations in response to interruptions indicated perceptual salience was strong across reverberant and anechoic conditions. Specifically, baseline pupil size before trials did not vary across room conditions, and mixing or blocking of trials (altering stimulus expectations) had no impact on pupillary responses. Together, these findings highlight that stimulus salience drives cognitive load more strongly than does task performance.
Issar, D.; Skog, E. E.; Grigg, M.; Kainerstorfer, J. M.; Smith, M. A.
Reaction time is a measure of the speed of our response to stimuli in the environment. Even for a well-trained task, a subject's reaction time varies. One source of this variability is internal state fluctuations (such as changes in arousal). Few studies have systematically quantified the extent to which reaction time varies across different timescales and linked this to measures of systemic physiology associated with arousal. In much of the literature, it is assumed but not demonstrated that behavioral and systemic measurements associated with arousal will be consistently linked because both estimate a common underlying arousal process. In this work, we examined this assumption by simultaneously measuring reaction time, heart rate, and pupil diameter in rhesus macaque monkeys performing several visual tasks over hours and across hundreds of sessions. We found that a portion of the variability in reaction time could be linked to systemic physiological signatures of arousal on fast timescales from second to second and slower timescales from minute to minute. This link between reaction time and systemic physiology was also present for different biomarkers of arousal (heart rate and pupil). However, the strength of this relationship varied depending on the arousal biomarker. Our findings support the conclusion that there are multiple arousal mechanisms that act simultaneously to influence behavior and multiple timescales at which they operate.
Ruffino, C.; Jacquet, T.; Lepers, R.; Papaxanthis, C.; Truong, C.
Mental fatigue is known to impair cognitive and motor performance, but its impact on motor learning remains unclear. This study examined how mental fatigue affects skill acquisition in a sequential finger-tapping task. Twenty-eight participants were assigned to either a mental fatigue group, which completed a thirty-minute Stroop task, or a control group, which watched a documentary of equivalent duration. Both groups then trained on the finger-tapping task across multiple practice blocks with brief rest periods. Overall motor skill improved similarly in both groups. However, mental fatigue altered the pattern of acquisition: participants in the fatigue group showed decreased performance during practice blocks, which was compensated by larger gains during inter-block rest periods. A strong negative correlation was observed between online decrements and offline improvements, indicating that greater declines during practice were associated with larger gains during rest. This study highlights the critical role of rest periods in maintaining learning under cognitively demanding conditions and provides insight into how internal states, such as mental fatigue, can selectively influence the expression of performance without compromising overall learning.
Wang, P.; Schoenfeld, M. J.; Maye, A.; Daume, J.; Schneider, T. R.; Engel, A. K.
Predicting the time point when an event will occur is fundamental for adaptive behavior, yet it remains unresolved whether temporal prediction can be influenced by low-frequency rhythmic modulation of sensory stimuli. Here, we tested whether external rhythmic sensory stimulation at a frequency in the delta range (0.5 - 3 Hz) alters performance in a visual temporal prediction task. Participants judged whether a moving visual stimulus reappeared too early or too late after disappearing behind an occluder, while the temporal structure of crossmodal sensory input was manipulated across two behavioral sessions. Results indicated that in the visual-auditory conditions, oscillatory stimulation in either the visual or auditory modality improved performance, whereas decaying sensory intensity over time impaired performance. In visual-tactile conditions, oscillatory visual stimulation also enhanced sensitivity, but rhythmic tactile stimulation did not produce a comparable benefit in performance. Critically, tactile stimulation improved performance only when aligned to the expected disappearance of the visual stimulus, demonstrating that the phase relationship between sensory input and intrinsic delta oscillations is behaviorally relevant. Together, these findings indicate that temporal prediction depends on the temporal structure of sensory input and support the relevance of delta-band oscillations in predictive behavior across and within sensory modalities. Hence, rhythmic modulation of sensory stimuli may provide a tool to enhance temporal prediction accuracy by stimulating oscillatory neural dynamics.
Srokova, S.; Barnes, C. A.; Ekstrom, A.
Current evidence suggests that older adults perform worse at tasks involving spatial memory and navigation, yet the underlying reasons remain unclear. Here, we tested the hypothesis that age-related declines in spatial memory stem from difficulties in recognizing spatial environments from rotated perspectives. Young and older adults underwent fMRI as they encoded virtual scenes which were later viewed either from the same or a rotated perspective. Older adults were worse at identifying changes in these scenes, although the age effect was equally robust across perspective conditions. Neural specificity of scene representations was examined with the phenomenon of fMRI repetition adaptation. We predicted that young adults would show significant fMRI adaptation to the same but not rotated perspective, indicative of intact viewpoint specificity, while older adults would show adaptation effects to both. While analyses of raw fMRI BOLD produced results consistent with these predictions, follow-up analyses revealed a general attenuation of activity in older adults across both perspective conditions. Additionally, although older adults showed both lower fMRI BOLD and worse spatial memory, lower trial-wise BOLD was associated with better performance independent of age. This suggests that the variance associated with fMRI adaptation reflects two independent sources of variance: age and cognition. Our results suggest that age differences in spatial memory may manifest due to cognitive and neural factors that are shared across same and rotated perspectives, and thus they cannot be explained by a selective deficit in allocentric (viewpoint-independent) processing. Significance Statement: Increasing age is often associated with reduced spatial memory and navigation.
Prior research suggests that age differences in spatial memory could be exacerbated by changes in perspective, possibly due to increased difficulties in the ability to construct allocentric (viewpoint-independent) representations from previously encoded egocentric perspectives. Here, we demonstrate that older adults are equally disadvantaged when recognizing layouts across same and rotated perspectives. fMRI analyses indicate that older age is associated with reduced fMRI BOLD in higher-level visual cortex across both perspective conditions, as opposed to altered specificity of perspective coding. Consequently, the present study challenges the notion that aging is associated with a selective decline in allocentric spatial memory and instead supports a more general age-related difficulty with scene processing.
Annicchiarico, G.; Belluardo, M.; Vallortigara, G.; Ferrari, P. F.
Humans order numbers in space from left to right, with smaller quantities represented preferentially in the left hemispace and larger ones in the right hemispace. The direction of this mental number line (MNL), or more generally of number-space associations (NSA), is influenced by cultural habits such as reading and writing direction. However, a growing body of evidence from pre-verbal infants and non-human animals suggests that number-space mappings may also have biological foundations. In non-human primates, evidence for a directional MNL remains mixed, partly due to small sample sizes and methodological heterogeneity. Here, we tested samples of rhesus (Macaca mulatta) and crab-eating macaques (Macaca fascicularis) across two experiments using spontaneous food-related tasks. In Experiment 1, monkeys chose between identical food quantities (1x1 to 24x24) presented on the left and right. No systematic spatial choice bias emerged as a function of numerical magnitude, and hand use did not differ across exact numerical pairs, although exploratory analyses revealed magnitude-related modulations of manual responses. In Experiment 2, monkeys were habituated to small (4x4) or large (16x16) quantities and subsequently tested with the alternative quantity. Results showed significantly more leftward choices following numerical decreases (16→4) and more rightward choices following numerical increases (4→16), indicating that relative numerical context, rather than absolute magnitude, elicited directional spatial biases. These findings suggest that in macaques, number-space associations emerge most robustly in comparative contexts involving expectancy violations of magnitude.
Billot, A.; Varkanitsa, M.; Jhingan, N.; Carvalho, N.; Falconer, I.; Small, H.; Ryskin, R.; Blank, I.; Fedorenko, E.; Kiran, S.
The mechanisms of aphasia recovery following left-hemisphere stroke remain debated. Two broad hypotheses have been proposed for how recovery occurs when specialized systems, such as the language system, are affected by brain damage: i) recovery depends on the remaining components of the language system; and ii) recovery depends on functional remapping in brain areas outside of the language system. A key candidate for such takeover of language function is the Multiple Demand (MD) system--an extensive bilateral network that supports executive functions and is associated with the ability to flexibly adapt to task goals. The theoretical premise is that this system is capable of a wide range of cognitive tasks and can potentially be repurposed for language when specialized resources are no longer sufficient. We used precision functional MRI to evaluate these two hypotheses about aphasia recovery in 37 individuals (mean age = 58.3, SD = 8.4) with chronic aphasia due to a single left-hemisphere stroke, along with 38 age-matched controls (mean age = 61.6, SD = 9.2). Participants performed extensively validated functional localizers to identify the language network and the MD network within individuals. Participants with aphasia additionally completed extensive behavioral assessments that evaluated linguistic and executive skills. We first examined responses during language processing--audio-visual speech comprehension and reading--in each of the two networks, and then we related activity and functional connectivity measures from the two networks to linguistic ability. Our results do not support the hypothesis of drastic reorganization of the language system in the form of co-opting parts of the MD system in chronic aphasia. 
First, the language network and the MD network remain robustly dissociated: the language network responds strongly and selectively to language across modalities (left-hemisphere language regions: pFDR < 0.003), and no MD region shows increased activation during language comprehension relative to controls (pFDR > 0.24). Second, functional connectivity analyses reveal no evidence for increased integration between the two networks during language processing. Third, linguistic ability, as measured by an extensive behavioral battery of tests, is associated with the strength of activity and functional connectivity within the language network, but not within the MD network. Although we cannot rule out a role for the MD network in aphasia recovery during the acute and subacute phases or in more severely impaired patients, it appears that during the chronic phase, language comprehension relies on the same specialized network as prior to the injury.
Allen, S. C.; Koukouvinis, S.; Varjopuro, S. M.; Keitel, A.
Cortical tracking of acoustic features is essential for the neural processing of continuous stimuli such as speech and music. For example, children with dyslexia have been shown to exhibit atypical cortical tracking. This tracking may therefore reflect a fundamental auditory temporal processing mechanism supporting literacy more generally. In the current pre-registered study, we tested the hypothesis that cortical tracking of speech and music predicts reading ability in healthy young adults (N = 32), evaluated through a lexical decision task. Participants first completed an online session in which they performed a lexical decision task to assess their reading skills. This was followed by an electroencephalography (EEG) session, in which participants listened to a naturalistic short story and a music track. Using mutual information, we showed that neural activity aligned to both speech and music across a wide range of frequencies. Interestingly, cortical tracking was stronger for speech at very low frequencies, while it was stronger for music at higher frequencies. Critically, cortical tracking predicted reaction times in the lexical decision task in a frequency-dependent manner: stronger delta-band tracking (~1-3 Hz) for both speech and music was associated with faster reaction times, whereas stronger alpha-band tracking (~12 Hz) for speech was associated with slower reaction times. These findings remained significant even when controlling for stimulus type, age, musical experience and reading enjoyment. These results suggest that cortical tracking of speech and music reflects a domain-general temporal processing mechanism that is associated with reading ability beyond stimulus-specific features, and beyond development. These findings advance our understanding of the neurobiological underpinnings of literacy and could potentially be leveraged for developing new reading interventions.
Eltas, Z.; Tunca, M. B.; Urgen, B. A.
Perceiving the direction of observed actions is critical for interpreting intentions and guiding social interaction. While direction selectivity has been extensively studied with simple stimuli such as dots, gratings, or point-light displays (PLDs), little is known about how the brain encodes direction in naturalistic, repetitive actions that are seen frequently in daily life. The present fMRI study investigated direction-selective representations during observation of complex actions performed along three bidirectional dimensions (left-right, up-down, front-back) within a 96-video stimulus set. Brain activity was analyzed using multivariate pattern analysis (MVPA) and multiple regression representational similarity analysis (RSA). MVPA revealed above-chance classification of action direction across occipital, parietal, and motor cortices, with the highest decoding in occipital, primary motor, and somatosensory regions. Crucially, RSA demonstrated that when accounting for low-level and motor features, direction information was still represented in early visual cortex, occipito-temporal areas, parietal regions, and motor-related regions. These findings indicate that action direction is represented across multiple levels of the action observation network (AON), extending from early sensory regions to higher-order parietal and frontal cortices. By using naturalistic, repetitive action videos, this study provides new evidence that the coding of action direction in the human brain is broadly distributed, reflecting the complexity of perceiving actions in everyday life. These findings suggest that direction selectivity is a core feature of the action observation network, linking basic motion processing with higher-level action understanding.
Tomasetig, G.; Sacheli, L. M.; Musco, M. A.; Pizzi, S.; Basso, G.; Spitoni, G. F.; Bottini, G.; Pizzamiglio, L.; Paulesu, E.
Humanity has always admired and created artwork, but the neurocognitive mechanisms behind artistic experience are still elusive. Professional artists and their intimate relationship with their artworks provide a unique opportunity to study the nature of art experience due to their expertise in both art making and art appreciation. During two fMRI tasks, professional artists (N=20) made aesthetic judgments on their own and other artists' paintings (aesthetic appreciation task); they also mentally reconstructed the moments when they conceived their artworks or, as a control condition, when they visited now-familiar places for the first time (reconstruction by imagery task). During art appreciation of their own (as compared to other artists') paintings, participants showed stronger recruitment of bilateral posterior parietal cortices, the left lateral occipitotemporal cortex, and the dorso-central sector of the right insula, that is, action-related brain regions also involved in encoding the emotional components of movements. The reconstruction of their own artistic creation (as compared to episodic memory retrieval) involved the left fronto-parietal network associated with motor cognition. Altogether, these results suggest that the mental representations of the actions involved in creating art are integral to the overall artistic experience of painters, supporting an embodied view of the artist's experience of art.
Xue, A. M.; Hsu, S.; LaRocque, K. F.; Raccah, O. M.; Gonzalez, A.; Parvizi, J.; Wagner, A. D.
Episodic memory depends on neural representations encoded in the hippocampus. Experimental and computational evidence suggests that the hippocampus encodes pattern-separated representations that support later recall of episodic event elements. While extant data in humans predominantly focus on assaying the relationship between the similarity of spatial neural patterns at encoding and later memory performance, similarity of neural patterns in the temporal domain may also reveal encoding computations predictive of future memory. To examine how the similarity among temporal patterns of hippocampal activity during encoding relates to later episodic retrieval (associative cued recall and recognition memory), hippocampal activity was recorded from human participants (n=7) with implanted intracranial electrodes while they encoded arbitrary (A-B) paired-associates. Subsequent memory analyses first revealed that hippocampal high-frequency broadband power (HFB; 70-180Hz) was linked to a graded increase in memory strength; HFB power was greater during the encoding of pairs later correctly recalled relative to events later recognized and was lowest for events later forgotten. Second, and critically, subsequent memory analyses further revealed that more distinctive temporal patterns in the hippocampus during encoding -- indexed by the similarity of the HFB timeseries elicited by a given event to that elicited by other events -- were associated with superior subsequent memory performance. Finally, exploratory analyses revealed stimulus category effects on hippocampal HFB power during encoding and retrieval cuing. These results indicate that the temporal distinctiveness of hippocampal traces during encoding is important for subsequent retrieval of episodic event elements, consistent with theories that posit that pattern separation facilitates future remembering.
Kim, J.; Lee, S.; Nam, K.
A central question in psycholinguistics is whether morphologically complex words are obligatorily decomposed into stems and affixes during visual word recognition or whether whole-word access can occur when forms are frequent and familiar. The present study investigated how morphological complexity and lexical frequency jointly shape neural responses by leveraging Korean nominal inflection, whose transparent stem-suffix structure permits a clean dissociation between base (stem) frequency and surface (whole-word) frequency. Twenty-five native Korean speakers completed a rapid event-related fMRI lexical decision task involving simple and inflected nouns that varied parametrically in both frequency measures. Representational similarity analysis (RSA) revealed robust encoding of surface frequency--but not base frequency--in the inferior frontal gyrus (IFG) pars opercularis and supramarginal gyrus (SMG), with significantly stronger correlations for inflected than simple nouns. Univariate analyses converged with this result: surface frequency selectively increased activation for inflected nouns in inferior parietal regions, whereas base frequency showed no reliable effects in any ROI. These findings challenge models positing obligatory pre-lexical decomposition, instead supporting accounts in which morphological processing is shaped by post-lexical, usage-driven lexical statistics. Taken together, our findings support a distributed perspective on morphological processing, suggesting that structural and statistical factors jointly constrain access to morphologically complex forms.
Augsten, M.-L.; Lindenbeck, M. J.; Laback, B.
Cochlear implant (CI) users typically experience difficulties perceiving musical harmony due to a restricted spectro-temporal resolution at the electrode-nerve interface, resulting in limited pitch perception. We investigated how stimulus parameters affect discrimination of complex-tone triads (three-voice chords), aiming to identify conditions that maximize perceptual sensitivity. Six post-lingually deafened CI listeners completed a same/different task with harmonic complex tones, while spectral complexity, voice(s) containing a pitch change, and temporal synchrony (simultaneous vs. sequential triad presentation) were manipulated. CI listeners discriminated harmonically relevant one-semitone pitch changes within triads when spectral complexity was reduced to three or five components per voice, with significantly better performance for three-component compared to nine-component tones. Sensitivity was observed for pitch changes in the high voice or in both high and low voices, but not for changes in only the low voice. Single-voice sensitivity predicted simultaneous-triad sensitivity when controlling for spectral complexity and voice with pitch change. Contrary to expectations, sequential triad presentation did not improve discrimination. An analysis of processor pulse patterns suggests that difference-frequency cues encoded in the temporal envelope rather than place-of-excitation cues underlie perceptual triad sensitivity. These findings support reducing spectral complexity to enhance chord discrimination for CI users based on temporal cues.
Zanesco, A. P.; Gross, A. M.; Spivey, D. J.; Stevenson, B. M.; Horn, L. F.; Zanelli, S. R.
Human attention is inherently transient and limited in span to only a few moments without lapsing. The intrinsic dynamics of large-scale neurocognitive networks are thought to contribute to these lapses and result in the unavoidable fluctuations in attention that constrain its span. However, it remains unclear how the millisecond temporal dynamics of specific electrophysiological brain states contribute to the endogenous maintenance of attention or the onset of attentional lapses. In the present study, we investigated whether the strength and millisecond dynamics of brain electric microstates differentiate states of focus from inattention and contribute to the endogenous maintenance of attention over short and long timescales. We recorded 128-channel EEG while participants maintained their attention during the wait time delay of trials in the Sustained Attention to Cue Task (SACT) and segmented the EEG into a categorized time series of microstates based on data-driven clustering of topographic voltage patterns. The findings revealed that the prevalence and rate of occurrence of microstates C and E in the wait time delay differentiated trials in which the target stimulus was correctly detected from those in which it was not. These same microstates were also implicated in the maintenance of attention over short and long timescales, with their time-varying dynamics changing systematically during the wait time delay of trials and over the course of the task session. Together, these findings demonstrate the sensitivity of microstates to variation in attentional states and suggest that the millisecond dynamics of these brain states contribute to the maintenance of attention over time.
Westner, B. U.; Luo, Y.; Piai, V.
Both episodic memory and word retrieval have been linked to power decreases in the alpha and beta oscillatory bands, but these patterns have rarely been related to each other, partly due to a lack of suitable methodological approaches. In this explorative study, we investigate the similarities and dissimilarities in the oscillatory fingerprints of the retrieval of words and episodes by directly comparing the activity patterns across time, frequency, and space. We acquired electroencephalography (EEG) data from participants performing a language task and an episodic memory task based on the same stimulus material. With a newly developed approach, we directly compared the source-reconstructed oscillatory activity using mutual information and a feature-impact analysis. While left temporal and frontal regions showed dissimilarities between the tasks, right-hemispheric parietal regions exhibited similarities. We speculate that this could indicate a homologous function of these regions, potentially sharing less-specific representations between the tasks. We further uncovered a dissociation of the alpha and beta bands regarding the similarity across tasks. While the beta band was dissimilar between word and episodic memory retrieval, the alpha band seemed to contribute to the similarity we observed in right parietal regions. Whether this points to a task-unspecific function of the alpha band or a functional role in the retrieval process of the presumed representations remains to be determined. In summary, we present an approach to study similarity across tasks using the temporal, spectral, and spatial dimensions of EEG data, and present results of exploring the shared oscillatory fingerprints between episodic memory and word retrieval.
Tzionit, N.; Filmon, D. G.; Maeir, T.; Boettcher, S. E. P.; Nobre, A. C.; Shalev, N.; Landau, A. N.
Attention-deficit/hyperactivity disorder (ADHD) has been associated with atypical temporal processing across multiple cognitive domains. However, most evidence derives from simplified paradigms that isolate timing from spatial behaviour. Here, we examine how temporal prediction operates within a continuous, dynamic visual environment. Using the Dynamic Visual Search (DVS) task, we embedded spatiotemporal regularities into a sustained stream of visual events, allowing observers to implicitly learn and anticipate predictable targets. Continuous mouse tracking provided a fine-grained measure of action planning beyond discrete reaction time and accuracy metrics. Young adults diagnosed with ADHD (N=40) were compared to matched neurotypical controls (N=38). Both groups benefited from target predictability and reduced distractor load, indicating intact early spatiotemporal learning in ADHD. Across the duration of the task, however, the groups diverged. Neurotypical participants showed progressive increases in behavioural benefits from prediction, accompanied by increasingly direct and efficient mouse trajectories. In contrast, individuals with ADHD reached a plateau in prediction benefits midway through the experiment. Their performance remained stable, with minimal evidence of resource depletion, but did not show further optimisation based on learned regularities. These findings suggest that while prediction formation is preserved in ADHD, its progressive utilisation across longer timescales is attenuated. Rather than reflecting a primary deficit in learning or sustained attention, ADHD may involve altered long-timescale integration or weighting of predictive information in dynamic environments.